Is AI the Gorilla in the Room?

Not the thing nobody sees.

The thing that makes everything else invisible.

 

By Joseph McFadden Sr.

Engineering Fellow, Zebra Technologies  |  Professor of Mechanical Engineering, Fairfield University

McFaddenCAE.com

 

 

The Holistic Analyst Series


 

 

Let me start with an experiment.

 

In 1999, psychologists Christopher Chabris and Daniel Simons asked volunteers to watch a short video. Two teams of people — one in white shirts, one in black — were passing basketballs back and forth. The volunteers were given one task: count the number of passes made by the white team.

 

Simple enough. They counted. They focused. And about thirty seconds into the video, a person in a full gorilla suit walked into the middle of the scene, stopped, beat their chest, and walked off.

 

About half the viewers never saw it.

 

Not because the gorilla was hidden. Not because it was subtle. It walked through the center of the frame in plain sight for nine full seconds.

 

The brain, loaded with the counting task, literally could not allocate the resources to perceive it.

 

This is inattentional blindness. And it became one of the most cited experiments in the history of cognitive psychology — because it made a profound point about the human mind. We do not see the world. We see what our brain predicts the world should look like. Everything else is filtered, suppressed, or simply never processed at all.

 

For years, when talking about engineering failures — the unit errors, the inherited assumptions, the foundational details that experienced professionals walk right past — I have used the gorilla as the metaphor. The error is in the room. Fully visible. And the brain, loaded with the complexity of the work, simply does not allocate the resources to see it.

 

But something has changed. And it requires a fundamental revision of the metaphor.

 

AI is not the gorilla in the room. AI is the gorilla suit. Put it on anything and it looks like it already passed the check.

 

Chapter One

The Prediction Machine and the Fluency Signal

 

To understand what has changed, you need to understand how the brain decides what deserves attention.

 

Karl Friston at University College London formalized this as the free energy principle. Your brain does not process every bit of incoming information. That would be catastrophically expensive — your brain is already burning twenty percent of your body's energy on two percent of its mass. Instead, it builds models of the world and only allocates full attention when reality violates those models. Friston calls the violation a prediction error.

 

No prediction error? The brain does almost nothing. The experience is smooth, effortless, automatic. Handled.

 

Prediction error? Full mobilization. Attention. Working memory. The fovea snaps to the source of the surprise.

 

This is an extraordinarily efficient system. It is why you can drive a familiar route while your mind is elsewhere, why you can type without looking at the keyboard, why you can read a document and extract meaning without consciously processing every word.
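
 

For readers who like to see a mechanism in code, the gating logic can be cartooned in a few lines. This is a toy sketch, not Friston's actual mathematics; the threshold, the learning rate, and the function name are illustrative choices of mine.

 

# A toy sketch of prediction-error gating (illustrative only).
# A running model predicts the next input; attention fires only
# when the prediction error exceeds a threshold. Familiar input
# quietly refines the model and is never consciously examined.

def attend_stream(stream, threshold=2.0, learning_rate=0.2):
    prediction = stream[0]
    for observation in stream:
        error = abs(observation - prediction)
        if error > threshold:
            # Surprise: hand the input to attention instead of
            # silently absorbing it into the model.
            print(f"ATTEND: saw {observation}, expected ~{prediction:.1f}")
        else:
            # No surprise: refine the model and move on.
            prediction += learning_rate * (observation - prediction)

# A smooth, familiar stream with one violation buried in it:
attend_stream([10.0, 10.2, 9.9, 10.1, 25.0, 10.0, 9.8])
# Only the 25.0 triggers ATTEND. Everything else is "handled".

 

Notice what the sketch never does: it never examines an input that matches the prediction, whether that input is right or merely expected.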

 

But the system has one specific vulnerability. It cannot distinguish between "familiar because it is correct" and "familiar because I have been assuming it is correct." Both feel identical. Both generate zero prediction error. Both pass through awareness without scrutiny.

 

The brain uses fluency — the smoothness of an experience, the absence of friction, the feeling that everything fits — as its primary signal that something has been handled. That it does not need more attention. That it is safe to move on.

 

And this is precisely where artificial intelligence enters the picture.

 

Chapter Two

AI Is Not the Gorilla. AI Is the Gorilla Suit.

 

The original gorilla experiment illustrates a failure of attention. The brain is loaded. Something fully visible goes unprocessed. The gorilla is there — it just does not get seen.

 

That is still happening. That will always happen. The prediction machine is ancient, and it is not going anywhere.

 

But artificial intelligence introduces a categorically different problem. Not a failure of attention. A failure of the signal that triggers attention in the first place.

 

The gorilla in the original experiment was jarring — a large dark shape that did not fit the scene, that violated expectations, that should have generated a significant prediction error in any brain not fully loaded with another task.

 

Artificial intelligence produces output that generates no prediction error at all.

 

It is polished. Confident. Structurally complete. The format is correct. The field labels are right. The sentences are fluent. The argument is internally consistent. The numbers are plausible. It fits the shape of what good work looks like so precisely that the prediction machine encounters it and does exactly what it was designed to do with familiar, expected, correctly formatted input.

 

It classifies the output as handled and moves on. The gorilla is not in the room. The gorilla suit is on the output.

 

And anything wearing that suit — a material card with the wrong unit system, a financial model built on last year's assumptions, a medical summary that missed a critical qualifier, a legal analysis that cited a superseded precedent — gets waved through. Not because the brain failed to allocate attention. Because the brain allocated attention, looked at the output, found it smooth and complete and familiar, and concluded: nothing to check here.

 

This is a new kind of invisible. And it is more dangerous than the old kind.

 

Chapter Three

Automation Bias: The Research Behind the Risk

 

This is not speculation. The behavioral science behind this failure mode has been accumulating for decades.

 

Researchers call it automation bias — the well-documented tendency to over-trust the outputs of automated systems that are usually right.

 

The effect was first characterized in the context of cockpit automation. Pilots who trusted their automated flight management systems over their own correct judgment. Not in moments of panic — in routine operations. The system said one thing. The pilot's read of the situation said another. The system was wrong. The pilot deferred anyway — because the system had been right so many times before that the prediction model had been updated: outputs from this system are reliable. No need to scrutinize.

 

The same effect has been documented in radiology. Radiologists reviewing scans with the assistance of an AI screening algorithm missed significantly more anomalies when the algorithm failed to flag them — not because the anomalies were harder to see, but because the algorithm's silence functioned as a clearance. The prediction machine received the signal: the system saw nothing. The system is usually right. Move on.

 

In financial analysis. In legal review. In medical diagnosis. In engineering simulation. Wherever a capable, usually-accurate automated system produces output, the same cognitive dynamic plays out. The more capable the system, the more the human brain learns to trust it. The more the brain learns to trust it, the less scrutiny the output receives. The less scrutiny the output receives, the more consequential an error in that output becomes — because it will travel further downstream before anyone catches it.

 

And now, with large language models, we have introduced into professional workflows a system that is not merely usually right about narrow, well-defined tasks. It is fluent across almost everything. Capable enough that its outputs look authoritative regardless of whether they are. Confident enough that it does not distinguish, in its delivery, between a fact it knows with high certainty and a number it has confabulated from statistical patterns.

 

The prediction machine has never had a more convincing gorilla suit to contend with.

 

Chapter Four

Where the Suit Gets Worn

 

Let me make this concrete, because the danger of a broad argument is that it stays abstract long enough for the brain to classify it as handled.

 

In engineering simulation: the material card.

 

An engineer asks an AI system to help populate a finite element model. The AI generates a material card — correct format, right field labels, plausible modulus, reasonable yield strength. And a density. AI is predominantly trained on SI data. The density it gives you for steel is 7,850 — correct in SI, in kilograms per cubic meter. If your model is in tonne-millimeter-second, you need 7.85E-9. Twelve orders of magnitude difference. The output looks exactly right. The prediction machine waves it through. The model runs. The contour plots are beautiful. The results are fiction.
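
 

The defense is equally concrete. Below is a minimal sanity check, assuming a tonne-millimeter-second target system. The reference density, the five percent tolerance, and the helper names are illustrative choices of mine, not any solver's API.

 

# A minimal unit-sanity check for a steel density value
# (illustrative sketch; reference value and tolerance are assumptions).

STEEL_DENSITY_SI = 7850.0  # kg/m^3, typical structural steel

def si_to_tonne_mm_s(rho_kg_m3):
    """Convert density from kg/m^3 to tonne/mm^3."""
    # divide by 1000 for kg -> tonne, by 1000^3 for /m^3 -> /mm^3
    return rho_kg_m3 / 1000.0 / 1000.0**3

def check_density(card_value, rel_tol=0.05):
    expected = si_to_tonne_mm_s(STEEL_DENSITY_SI)  # 7.85e-9
    if abs(card_value - expected) <= rel_tol * expected:
        return "plausible for tonne-mm-s"
    if abs(card_value - STEEL_DENSITY_SI) <= rel_tol * STEEL_DENSITY_SI:
        return "looks like SI (kg/m^3) pasted into a tonne-mm-s model"
    return "matches no expected unit system: verify by hand"

print(check_density(7850.0))   # the fluent, wrong value: flagged
print(check_density(7.85e-9))  # the correct value: passes

 

Twenty lines of deliberate friction, and a twelve-order-of-magnitude error can no longer wave itself through.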

 

In financial analysis: the forecast model.

 

An analyst asks an AI system to help build a quarterly revenue projection. The AI produces a clean, well-structured output with logical formulas, reasonable assumptions, and professional formatting. The assumptions were drawn from conditions the AI's training data reflected well. Current conditions have shifted. The model does not know. The output looks right. The prediction machine waves it through.

 

In medicine: the clinical summary.

 

A physician uses an AI tool to summarize a patient's history before a consultation. The summary is fluent, organized, and complete-looking. A medication interaction from three months ago — documented in the record but outside the tool's retrieval window — is absent. And the absence does not look like an absence. The prediction machine receives a smooth, complete summary and concludes: nothing missing here.

 

In legal analysis: the precedent.

 

A lawyer uses an AI research tool to find supporting cases. The tool returns several cases, all formatted correctly, all appearing relevant. One is subtly mischaracterized. One was overturned two years ago. The outputs look authoritative. The prediction machine does not know to ask whether they should be verified.

 

In every case, the mechanism is identical. The output is fluent. The prediction machine encounters fluency and produces zero prediction error. The gorilla suit is on. The check does not happen. And the consequence scales with the altitude of the work.

 

Chapter Five

The Compounding Problem: Uncertainty Made Invisible

 

There is a second layer to this that makes it more serious than automation bias alone.

 

Artificial intelligence does not communicate uncertainty the way a careful human expert does.

 

When a senior engineer reviews your simulation and says "I think this looks right" — the hedge is in the language. When a trusted colleague hands you a calculation and says "I'm fairly confident about the boundary conditions but double-check the material properties" — the uncertainty is explicit, stated, available for your brain to register.

 

Artificial intelligence delivers its outputs in a register of confidence that does not distinguish between what it knows well and what it is reconstructing from statistical patterns. The density of steel. The date of a historical event. A legal precedent. A material property from an alloy specification it may or may not have seen accurately in training. All delivered in the same fluent, authoritative, complete-sounding voice.

 

The brain reads certainty into confident delivery. With AI, that assumption is systematically unwarranted. The gorilla suit does not just look like a professional. It sounds like one.

 

This is epistemic uncertainty made invisible. The brain — already primed to suppress the familiar and wave through the fluent — has no signal to work with. The hedges are not there. The qualifications are absent. The output arrives dressed in certainty it may not have earned.

 

Chapter Six

The Turn: AI as Socratic Partner

 

Everything described so far is a risk analysis. It is also an argument for a practice.

 

Because the same tool that makes the gorilla suit also makes one of the most powerful checking mechanisms any of us have ever had access to.

 

The prediction machine suppresses questions. It is designed to. It classifies the familiar as handled and moves attention toward the novel. The deliberate act of asking "wait — is this actually right?" is exactly the kind of conscious interruption that the ancient circuitry resists.

 

An artificial intelligence system does not have a prediction machine. It does not get tired. It does not have a professional investment in the existing model being correct. It does not feel the pressure of a deadline or the cognitive load of managing a complex analysis or the emotional investment in a design passing validation.

 

Ask it to argue that your output is wrong, and it will argue. Ask it to find what you are assuming, and it will find assumptions. Ask it to steelman the contradictory evidence — the signal you were about to explain away — and it will steelman it. Ask it to identify what a skeptic would find first, and it will find something.

 

This is using artificial intelligence to generate deliberate friction rather than accepting fluency.

 

Answer Machine vs. Socratic Partner

Answer machine: 'Generate the material card for structural steel.'
Socratic partner: 'What should steel density be in tonne-mm-s? Does this value match? If not, what unit system does it correspond to?'

Answer machine: 'Build me a Q3 forecast.'
Socratic partner: 'What assumptions is this forecast making that may not hold? What would have to be wrong for this to be significantly off?'

 

The tool is identical in both cases. The cognitive discipline surrounding it is not. When you use AI as an answer machine, you hand the prediction machine a fluent, polished, handled signal. The gorilla suit is on. The blind spot widens. When you use AI as a Socratic partner — asking it to challenge, to find the gap, to argue the other side — you generate the deliberate friction the prediction machine works hardest to suppress. The blind spot narrows.

 

I partner with artificial intelligence every day. Not to be told what to think — to be challenged on how I am thinking. That distinction is everything.

 

Chapter Seven

Building the Discipline

 

How do you make the Socratic habit durable enough to hold under the cognitive load of real work? Three practices.

 

First: name the risk before you use the tool.

 

Not as a ritual disclaimer. As a genuine cognitive interrupt. Before you accept AI output on anything consequential, write one sentence: "What in this output am I about to assume is correct without verifying?" That sentence is not a formality. It is a direct challenge to the prediction machine — it forces the brain into deliberate checking before the fluency signal has a chance to wave the gorilla suit through.

 

Second: ask the Socratic question before the confirming question.

 

The natural impulse is to ask the AI: "Is this right?" That is the wrong question. It invites confirmation of the existing model. The right question is: "What might be wrong about this?" Or: "What assumption is this resting on that I have not verified?" Or: "Argue that this is incorrect — what evidence would you cite?" The question you ask shapes the output you receive. And the output you receive shapes what the prediction machine does with it.

 

Third: build the daily reflective practice.

 

At the end of any working day in which you used AI as part of consequential work, write for five minutes. Not a summary of outputs. A diagnostic on process. Where did the output feel smooth and complete? Where did you accept it without a verification question? What was wearing the gorilla suit today? This is metacognitive training — the systematic development of the internal sensor that catches the fluency signal before it trips the "handled" response. Over time, it rewires the prediction machine. Fluency itself becomes a warning signal rather than a clearance.

 

The Three Practices

1. Before accepting AI output: write 'What am I about to assume is correct without verifying?'
2. Ask the Socratic question first: 'What might be wrong?' not 'Is this right?'
3. Daily five-minute diagnostic: where did the gorilla suit appear in my work today?

 

 

The Holistic View

 

I have spent decades watching the same cognitive failure play out at every altitude. In classrooms, in engineering organizations, in aerospace programs, in catastrophic system failures that made international news. The mechanism is always the same: the prediction machine encounters something familiar, generates zero prediction error, and classifies the detail as handled. The eyes see it. The brain does not process it.

 

The gorilla was always in the room.

 

What has changed is the suit.

 

Artificial intelligence has introduced into professional work the most convincing fluency generator in human history. Output that looks right, sounds authoritative, and fits every format the prediction machine uses to decide that something has been handled — regardless of whether it is actually right.

 

This is not an argument against artificial intelligence. It is an argument for the most important thing we can bring to the use of any powerful tool: an understanding of what it does to the human cognition surrounding it.

 

The gorilla suit does not fool the person who knows it is a gorilla suit.

 

The next time an AI output looks smooth, complete, and exactly right — that smoothness is not a signal that the work is done. That smoothness is the signal to check.

 

 

 

Joseph McFadden Sr. is an Engineering Fellow at Zebra Technologies leading the MEAS (Mechanical Engineering Analysis & Services) team, and a Professor of Mechanical Engineering at Fairfield University. He has over 44 years of experience in failure analysis, CAE simulation, materials science, and expert witness work, and was one of three pioneers who brought Moldflow simulation technology to North America. He writes and teaches under the "Holistic Analyst" and "Building Intuition Before Equations" brands, exploring the intersection of engineering, neuroscience, and systems thinking.

 

All thoughts and ideas are the author's own, formatted and expanded with Claude AI — not to be told what to write, but to debate and build upon the work.

 

McFaddenCAE.com — Free essays, audiobooks, and tools. No strings. Just food for thought.